I’ve mentioned before that scaling performance in PC games to lower-end systems is very important. Upon reading Sheba’s post on the same subject, I thought it would be apt to pen down some facts on why scaling is a hard problem, and my thoughts on why more developers don’t do it.

As always, I’ll try to keep technical details to a minimum.

Pretty much all the important functionality in PC graphics these days is implemented in hardware. Your graphics card is a circuit board that specializes in manipulating vectors, performing matrix operations, calculating light reflections and shading pixels. This has more or less been the state of affairs since 2000 or so, when the first nVidia GeForce hit the market. That said, newer cards don’t just outstrip older ones performance-wise – they often have completely new features that the older cards are either incapable of pulling off, or need to work very hard to achieve. For example, the GeForce 8 series of cards is capable of doing anti-aliasing (smoothing out jagged edges) and high dynamic range lighting (a much more realistic simulation of light interacting with the environment, seen in games like Bioshock) simultaneously in hardware. The older GeForce 7 series can’t do this using the same method, but it is possible to pull off the same effect using programming tricks (the Source engine does this, allowing me to enjoy the beautiful HDR lighting in Half-Life 2: Episode Two).
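
To make that a bit more concrete, here’s a minimal sketch in C++ of the kind of decision an engine might make at startup when picking an HDR technique for the detected card. The capability names are invented for illustration – this isn’t how Valve or the driver vendors actually expose these flags.

```cpp
#include <cstdio>

// Hypothetical capability flags an engine might fill in after querying the
// driver at startup (names invented for this example; not a real API).
struct GpuCaps {
    bool fp16_render_targets;   // can render into 16-bit floating-point buffers
    bool msaa_on_fp16_targets;  // can also multisample those buffers (GeForce 8-class)
};

enum class HdrPath {
    FloatWithMsaa,    // "real" FP16 HDR with hardware anti-aliasing on top
    IntegerFallback,  // HDR approximated in integer buffers so MSAA still works
    None              // no HDR at all
};

// Pick the best HDR technique the detected hardware can handle.
HdrPath ChooseHdrPath(const GpuCaps& caps) {
    if (caps.fp16_render_targets && caps.msaa_on_fp16_targets)
        return HdrPath::FloatWithMsaa;
    if (caps.fp16_render_targets)         // FP16 works, but not alongside MSAA:
        return HdrPath::IntegerFallback;  // trade some precision to keep smooth edges
    return HdrPath::None;
}

int main() {
    GpuCaps geforce7_like{true, false};   // illustrative values only
    std::printf("Chosen HDR path: %d\n", static_cast<int>(ChooseHdrPath(geforce7_like)));
    return 0;
}
```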

All of this, of course, is a problem for programmers, who have to make sure the game executes different code depending on which graphics card you might have. And this is the crux of the problem – writing code for several types of hardware at once. Some of this is abstracted away by software layers (such as DirectX), but even then, there are differences as to which version of DirectX a card supports. When you consider just how many different models of graphics card are out there right now (as well as how many versions of DirectX need to be supported), the magnitude of this problem becomes clear. Console developers don’t have this problem because they only have to target one configuration – not that they’re above royally cocking things up anyway (see the Xbox 360 release of Bully, or any number of buggy PlayStation 3 releases, for examples of this).
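
As a rough illustration of what those checks look like in practice, here’s the sort of capability query a DirectX 9 renderer might run at startup to decide which shader path to use (Windows and the DirectX SDK assumed; this is a generic sketch, not any particular engine’s code):

```cpp
// Build on Windows with the DirectX SDK and link against d3d9.lib.
#include <d3d9.h>
#include <cstdio>

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps = {};
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // The reported pixel shader version decides which set of shaders
    // (and therefore which visual features) the engine can turn on.
    if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0))
        std::puts("Shader Model 3 path: full effects");
    else if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        std::puts("Shader Model 2 path: reduced effects");
    else
        std::puts("Fixed-function/SM1 path: minimal effects");

    d3d->Release();
    return 0;
}
```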

We’ve therefore established that this is a hard problem. Given that, a possible reason why developers don’t do their due diligence for all PC ports becomes pretty clear. Most big-name titles these days are multi-platform affairs – that is to say, they make it to a major console as well as to the PC. To keep development schedules in sync, developers tend to do a quick port of the code that has been written for the console platform – aimed at a fixed target, thus eschewing scalability of any kind. Why do you think the PC port of Assassin’s Creed required a dual-core processor? The 360 and PS3 are both machines designed from the ground up for multiple cores, and I suspect the port was quick and dirty, leading to this entirely unreasonable requirement on end users.
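
For contrast, a properly scalable engine would size its worker threads to whatever the machine actually offers instead of flatly demanding two cores. Here’s a minimal C++ sketch of the principle (not Ubisoft’s code, obviously; the work items are just placeholders):

```cpp
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    // Ask the runtime how many hardware threads are available and size the
    // worker pool to match, falling back to one if the query can't tell us.
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < cores; ++i)
        workers.emplace_back([i] { std::printf("worker %u running\n", i); });

    for (auto& w : workers) w.join();
    return 0;
}
```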

I really think that problems like this would go away (or at least become less of an issue) if developers built their engines on the PC first, made them completely scalable, and then proceeded to port the engines to the various consoles. Valve has certainly done this with the Source engine, and gamers as a whole are better off for it. It may add to the time needed for development, but I’m perfectly fine with a later game if it means more people get to play it. I’m hearing some promising noises in this area – Capcom’s main internal engine was developed on a PC first before being ported to consoles – but we probably have some way to go before sanity reasserts itself among developers.

2 thoughts on “The scaling question”
  1. “…before sanity reasserts itself among developers.”

    Thing is, what may be considered sanity to programmers and PC gamers is not always considered sanity to accountants. Retooling an engine to be scalable across the vast range of PC hardware out there requires a lot of recoding and testing on a wide range of machines. Doing that means spending even more money on a project, with absolutely no guarantee that the PC version of the game will sell enough copies to cover the extra cost.

    It’s not simply a technical problem for games, but a financial one as well. Keep in mind Valve can afford to make their engine scalable out of the wazoo since they’re rolling in the money from Half-Life, Steam and related products; other devs may have a set budget from their publisher that makes gearing their game to the PC financially unattractive.

    1. It’s true that it takes a lot of resources to make an engine properly scalable, but it certainly isn’t impossible. Ironclad made the Sins engine scale quite well, and I doubt they’re anywhere near as well-endowed financially as Valve or Blizzard are.

      Heck, I bet Ubisoft could have spent the money making Assassin’s Creed run on machines without dual core CPUs, at least. It’s not like they’re an indie developer or something.
